Diverse Behavior Is What Game AI Needs: Generating Varied Human-Like Playing Styles Using Evolutionary Multi-Objective Deep Reinforcement Learning
Zheng, Yan, Shen, Ruimin, Hao, Jianye, Chen, Yinfeng, Fan, Changjie
Designing artificial intelligence for games (Game AI) has long been recognized as a notoriously challenging task in the game industry, as it mainly relies on manual design and requires plenty of domain knowledge. More frustratingly, even after spending a lot of effort, a satisfying Game AI is still hard to achieve by manual design due to the almost infinite search space. The recent success of deep reinforcement learning (DRL) sheds light on advancing automated game design, significantly reducing the reliance on human expertise. However, existing DRL algorithms mostly focus on training a Game AI to win the game rather than on the way it wins (style). To bridge the gap, we introduce EMO-DRL, an end-to-end game design framework that leverages evolutionary algorithms, DRL, and multi-objective optimization (MOO) to perform intelligent and automatic game design. First, EMO-DRL proposes style-oriented learning to bypass manual reward shaping in DRL and directly learn a Game AI with an expected style in an end-to-end fashion. On this basis, prioritized multi-objective optimization is introduced to achieve more diverse, natural, and human-like Game AIs. Large-scale evaluations on an Atari game and a commercial massively multiplayer online game are conducted. The results demonstrate that EMO-DRL, compared to existing algorithms, achieves better game designs in an intelligent and automatic way.
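The multi-objective selection step the abstract refers to can be illustrated with a toy Pareto filter: candidate policies are scored on two objectives (here, hypothetical `win_rate` and `style_score` values, not from the paper), and only the non-dominated ones are kept as the behaviorally diverse front. This is a minimal sketch of generic non-dominated filtering, not the authors' prioritized MOO procedure.

```python
def dominates(a, b):
    """True if candidate a is at least as good as b on every objective
    and strictly better on at least one (maximization assumed)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep the non-dominated (win_rate, style_score) pairs."""
    return [a for a in candidates if not any(dominates(b, a) for b in candidates)]

# Toy population: a strong-but-bland policy, a stylish-but-weaker one,
# and one that is dominated on both objectives.
population = [(0.9, 0.1), (0.5, 0.8), (0.4, 0.4)]
front = pareto_front(population)
```

In an evolutionary loop, such a front would seed the next generation, so policies that win differently (rather than only policies that win more) survive.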
Fast Heuristic Search for RTS Game Combat Scenarios
Churchill, David (University of Alberta) | Saffidine, Abdallah (Université Paris-Dauphine) | Buro, Michael (University of Alberta)
Heuristic search has been very successful in abstract game domains such as Chess and Go. In video games, however, adoption has been slow due to the fact that state and move spaces are much larger, real-time constraints are harsher, and constraints on computational resources are tighter. In this paper we present a fast search method — Alpha-Beta search for durative moves — that can defeat commonly used AI scripts in RTS game combat scenarios of up to 8 vs. 8 units running on a single core in under 5ms per search episode. This performance is achieved by using standard search enhancements such as transposition tables and iterative deepening, and novel usage of combat AI scripts for sorting moves and state evaluation via playouts. We also present evidence that commonly used combat scripts are highly exploitable — opening the door for a promising line of research on opponent combat modelling.
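The core search the abstract builds on can be sketched as a plain alpha-beta recursion; the paper's durative-move handling, transposition tables, and script-based playout evaluation are omitted here, and the `children`/`evaluate` callbacks are hypothetical stand-ins for a real combat-state successor generator and heuristic.

```python
import math

def alpha_beta(state, depth, alpha, beta, maximizing, children, evaluate):
    """Generic alpha-beta pruning.

    children(state) -> list of successor states (empty at terminals)
    evaluate(state) -> heuristic value from the maximizing player's view
    """
    kids = children(state)
    if depth == 0 or not kids:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for child in kids:
            value = max(value, alpha_beta(child, depth - 1, alpha, beta,
                                          False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:  # beta cutoff: opponent avoids this branch
                break
        return value
    else:
        value = math.inf
        for child in kids:
            value = min(value, alpha_beta(child, depth - 1, alpha, beta,
                                          True, children, evaluate))
            beta = min(beta, value)
            if beta <= alpha:  # alpha cutoff
                break
        return value

# Tiny illustrative game tree: root 'A' (max to move) with leaves 'B' and 'C'.
tree = {'A': ['B', 'C'], 'B': [], 'C': []}
values = {'B': 3, 'C': 5}
best = alpha_beta('A', 2, -math.inf, math.inf, True, tree.__getitem__, values.get)
```

Move ordering matters for pruning efficiency, which is why the paper uses combat scripts to sort moves: cutoffs come earlier when strong moves are searched first.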